93 research outputs found

    Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments

    Full text link
    Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (~30% better) and would prefer to use (~32% more often).
    Comment: International Conference on Intelligent Robots and Systems (IROS 2019). Demo 1: Finding the described object (https://youtu.be/BE6-F6chW0w), Demo 2: Referring to the pointed object (https://youtu.be/nmmv6JUpy8M), Supplementary Video (https://youtu.be/sFjBa_MHS98)

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Full text link
    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall showed preference for the augmented reality condition over the monitor and mixed reality conditions.

    User Study Exploring the Role of Explanation of Failures by Robots in Human Robot Collaboration Tasks

    Full text link
    Despite great advances in what robots can do, they still experience failures in human-robot collaborative tasks due to high randomness in unstructured human environments. Moreover, a human's unfamiliarity with a robot and its abilities can cause such failures to repeat. This makes the ability to explain failures very important for a robot. In this work, we describe a user study that incorporated different robotic failures in a human-robot collaboration (HRC) task aimed at filling a shelf. We included different types of failures and repeated occurrences of such failures in a prolonged interaction between humans and robots. The failure resolution involved human intervention in the form of human-robot bidirectional handovers. Through such studies, we aim to test different explanation types and explanation progression in the interaction and record human responses.
    Comment: Contributed to: "The Imperfectly Relatable Robot: An interdisciplinary workshop on the role of failure in HRI", ACM/IEEE International Conference on Human-Robot Interaction HRI 2023. Video can be found at: https://sites.google.com/view/hri-failure-ws/teaser-video

    Using explainability to help children understand Gender Bias in AI

    Get PDF
    The final publication is available at ACM via http://dx.doi.org/10.1145/3459990.3460719
    Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination that these systems might exert due to unconscious bias present in the data, for example regarding gender and race. Whilst this issue has been proposed as an essential subject to be included in the new AI curricula for schools, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raise awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, age 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination, increasing their ability to recognize biased predictions when they interact with the interpretable model, highlighting its suitability for educational programs.
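    The Grad-CAM technique mentioned above weights each convolutional feature map by the spatial mean of its gradient and keeps only the positive evidence as a heatmap. A minimal sketch of that weighting step, using synthetic activations and gradients in place of the platform's real classifier (the array shapes and values here are illustrative assumptions, not the paper's model):

    ```python
    import numpy as np

    # Hypothetical activations of the last conv layer and the gradients of the
    # class score with respect to them (in practice captured via a forward/backward
    # pass through the trained classifier).
    rng = np.random.default_rng(0)
    activations = rng.random((8, 32, 32))      # (channels, H, W), post-ReLU so >= 0
    gradients = rng.standard_normal((8, 32, 32))  # d(score) / d(activation)

    # Grad-CAM: channel weight = spatial mean of the gradient for that channel.
    weights = gradients.mean(axis=(1, 2))      # shape (8,)

    # Weighted sum over channels, then ReLU to keep only positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)

    # Normalise to [0, 1] so the map can be overlaid on the input image.
    cam = cam / (cam.max() + 1e-8)
    print(cam.shape)  # → (32, 32)
    ```

    In the educational platform this heatmap would be upsampled to the input image size and overlaid on it, so students can see which pixels drove a (possibly biased) prediction.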

    2nd Workshop on Evaluating Child Robot Interaction

    Get PDF
    Many researchers have started to explore natural interaction scenarios for children. No matter whether these children are typically developing or have special needs, evaluating Child-Robot Interaction (CRI) is a challenge. Finding methods that work well and provide reliable data is difficult, for example because commonly used methods such as questionnaires do not work well, particularly with younger children. Previous research has shown that children need support in expressing how they feel about technology. Given this, researchers often choose time-consuming behavioral measures from observations to evaluate CRI. However, these are not necessarily comparable between studies and robots. This workshop aims to bring together researchers from different disciplines to share their experiences on these aspects. The main topics are methods to evaluate child-robot interaction design, methods to evaluate socially assistive child-robot interaction, and multi-modal evaluation of child-robot interaction. Connected questions that we would like to tackle are, for example: i) What are reliable metrics in CRI? ii) How can we overcome the pitfalls of survey methods in CRI? iii) How can we integrate qualitative approaches in CRI? iv) What are the best practices for in-the-wild studies with children? Looking across disciplinary boundaries, we want to discuss advantages and shortcomings of using different evaluation methods in order to compile guidelines for future CRI research. This workshop is the second in a series that started at the International Conference on Social Robotics in 2015.

    A Systematic Review on Reproducibility in Child-Robot Interaction

    Full text link
    Research reproducibility - i.e., rerunning analyses on original data to replicate the results - is paramount for guaranteeing scientific validity. However, reproducibility is often very challenging, especially in research fields where multi-disciplinary teams are involved, such as child-robot interaction (CRI). This paper presents a systematic review of the last three years (2020-2022) of research in CRI through the lens of reproducibility, by analysing the field for transparency in reporting. Across a total of 325 studies, we found deficiencies in reporting demographics (e.g. age of participants), study design and implementation (e.g. length of interactions), and open data (e.g. maintaining an active code repository). From this analysis, we distill a set of guidelines and provide a checklist for systematically reporting CRI studies, to help and guide research towards improved reproducibility in CRI and beyond.

    Teachers’ Views on the Use of Empathic Robotic Tutors in the Classroom

    Get PDF
    In this paper, we describe the results of an interview study conducted across several European countries on teachers’ views on the use of empathic robotic tutors in the classroom. The main goals of the study were to elicit teachers’ thoughts on the integration of robotic tutors into daily school practice, to understand the main roles that these robots could play, and to gather teachers’ main concerns about this type of technology. Teachers’ concerns were mainly related to fairness of access to the technology, robustness of the robot in students’ hands, and disruption of other classroom activities. They saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and in gathering information about students’ learning progress without taking over the teachers’ responsibility for the actual assessment. The implications of these results are discussed in relation to teacher acceptance of ubiquitous technologies in general and robots in particular.